# This is a so-called "R chunk" where you can write R code.
date() # the date this document was knitted
## [1] "Tue Nov 17 20:50:40 2020"
This is my first time combining the tools used on this Introduction to Open Data Science (IODS) course: R, RStudio, R Markdown, Git and GitHub. Feeling very excited to embrace the future!
IODS project GitHub repository: https://github.com/mlammins/IODS-project
“By definition all scientists are data scientists. In my opinion, they are half hacker, half analyst, they use data to build products and find insights. It’s Columbus meet Columbo ― starry-eyed explorers and skeptical detectives.” ―Monica Rogati, Independent Data Science Advisor
This week I practiced linear regression and model validation, analyzing survey data on students' approaches to learning.
The background of the data as described by the author:
Kimmo Vehkalahti: ASSIST 2014 - Phase 3 (end of Part 2), N=183 Course: Johdatus yhteiskuntatilastotieteeseen, syksy 2014 (Introduction to Social Statistics, fall 2014 - in Finnish), international survey of Approaches to Learning, made possible by Teachers’ Academy funding for KV in 2013-2015.
Data collected: 3.12.2014 - 10.1.2015 / KV. Data created: 14.1.2015 / KV, in English 9.4.2015 / KV, Florence, Italy. Imputation 4.4.2015: only missing information in certain backgrounds, minimal amount of missing values imputed using Phases 1 and 2.
For more information, see https://www.mv.helsinki.fi/home/kvehkala/JYTmooc/JYTOPKYS3-meta.txt
learning2014 <- read.csv("./data/learning2014.csv") # reading the analysis data
dim(learning2014) # number of rows and columns
## [1] 166 7
str(learning2014) # type of data
## 'data.frame': 166 obs. of 7 variables:
## $ gender : chr "F" "M" "F" "M" ...
## $ age : int 53 55 49 53 49 38 50 37 37 42 ...
## $ attitude: num 3.7 3.1 2.5 3.5 3.7 3.8 3.5 2.9 3.8 2.1 ...
## $ deep : num 3.58 2.92 3.5 3.5 3.67 ...
## $ stra : num 3.38 2.75 3.62 3.12 3.62 ...
## $ surf : num 2.58 3.17 2.25 2.25 2.83 ...
## $ points : int 25 12 24 10 22 21 21 31 24 26 ...
The learning2014 dataset used in this exercise consists of 166 observations of 7 variables. The variables deep, stra and surf were calculated by averaging several Likert-scale questions (scored from 1 to 5). Variable names and short descriptions:

- gender: gender of the participant (F = female, M = male)
- age: age in years
- attitude: global attitude toward statistics (scale 1 to 5)
- deep: deep approach to learning (mean of related questions, 1 to 5)
- stra: strategic approach to learning (mean of related questions, 1 to 5)
- surf: surface approach to learning (mean of related questions, 1 to 5)
- points: exam points
To get the idea of the data, let’s make a graphical summary of the variables with females (red) and males (blue):
library(ggplot2)
library(GGally)
p <- ggpairs(learning2014, mapping = aes(col=gender, alpha=0.3), lower = list(combo = wrap("facethist", bins = 20)))
p # graphical summary
summary(learning2014) # numerical summary
## gender age attitude deep
## Length:166 Min. :17.00 Min. :1.400 Min. :1.583
## Class :character 1st Qu.:21.00 1st Qu.:2.600 1st Qu.:3.333
## Mode :character Median :22.00 Median :3.200 Median :3.667
## Mean :25.51 Mean :3.143 Mean :3.680
## 3rd Qu.:27.00 3rd Qu.:3.700 3rd Qu.:4.083
## Max. :55.00 Max. :5.000 Max. :4.917
## stra surf points
## Min. :1.250 Min. :1.583 Min. : 7.00
## 1st Qu.:2.625 1st Qu.:2.417 1st Qu.:19.00
## Median :3.188 Median :2.833 Median :23.00
## Mean :3.121 Mean :2.787 Mean :22.72
## 3rd Qu.:3.625 3rd Qu.:3.167 3rd Qu.:27.75
## Max. :5.000 Max. :4.333 Max. :33.00
Taking a look at the graphical overview, the distributions of female (red) and male (blue) values seem relatively similar in all categories. Females seem to have slightly higher values in surface learning (surf) and strategic learning (stra) and slightly lower values in attitude. The numerical summary of all data (not split by gender) shows that the majority of participants are young (under 30 years of age).
Next, let’s test whether attitude, strategic learning (stra) and surface learning tendency (surf) are associated with the number of points obtained from the exam:
library(ggplot2)
# create a regression model with multiple explanatory variables
my_model <- lm(points ~ attitude + stra + surf, data = learning2014)
# print out a summary of the model
summary(my_model)
##
## Call:
## lm(formula = points ~ attitude + stra + surf, data = learning2014)
##
## Residuals:
## Min 1Q Median 3Q Max
## -17.1550 -3.4346 0.5156 3.6401 10.8952
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 11.0171 3.6837 2.991 0.00322 **
## attitude 3.3952 0.5741 5.913 1.93e-08 ***
## stra 0.8531 0.5416 1.575 0.11716
## surf -0.5861 0.8014 -0.731 0.46563
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 5.296 on 162 degrees of freedom
## Multiple R-squared: 0.2074, Adjusted R-squared: 0.1927
## F-statistic: 14.13 on 3 and 162 DF, p-value: 3.156e-08
The coefficients table gives the interpretation of the model. The intercept gives the “baseline” for the exam points, and attitude clearly has the strongest association with exam points: if attitude rises by one unit, exam points increase by about 3.4 units, given that all other variables stay the same. The effects of stra and surf are below one unit. The importance of attitude can also be seen in the last column, which gives the statistical significance of each coefficient. The three stars *** indicate high statistical significance, i.e. the coefficient clearly differs from zero and thus the variable has a relationship with the target variable. More precisely, the p-value shown here is the probability of observing an estimate at least this far from zero if the true coefficient were zero.
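As an extra check (not part of the original output), 95 % confidence intervals for the coefficients can be computed with confint(); an interval containing zero points to a non-significant predictor:

# 95 % confidence intervals for the model coefficients
confint(my_model)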
Let’s drop surf (the highest p-value) to see if it improves the fit:
# create a regression model with multiple explanatory variables
my_model2 <- lm(points ~ attitude + stra, data = learning2014)
# print out a summary of the model
summary(my_model2)
##
## Call:
## lm(formula = points ~ attitude + stra, data = learning2014)
##
## Residuals:
## Min 1Q Median 3Q Max
## -17.6436 -3.3113 0.5575 3.7928 10.9295
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 8.9729 2.3959 3.745 0.00025 ***
## attitude 3.4658 0.5652 6.132 6.31e-09 ***
## stra 0.9137 0.5345 1.709 0.08927 .
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 5.289 on 163 degrees of freedom
## Multiple R-squared: 0.2048, Adjusted R-squared: 0.1951
## F-statistic: 20.99 on 2 and 163 DF, p-value: 7.734e-09
Leaving out surface learning increases the significance of the remaining variables (smaller values in the Pr(>|t|) column). The p-value of strategic learning (stra) is still relatively high (debatable), but as it falls below 0.10, let’s keep it in the model. So we have found our final model.
Our final model is thus
\(\text{exam points} = 8.97 + 3.47 \times attitude + 0.91 \times stra\)
This means that a one-unit increase in attitude increases exam points by 3.47 (stra unchanged), and a one-unit increase in strategic learning (stra) increases exam points by 0.91. The baseline for exam points is 8.97 (the y-intercept). Note that this is only the systematic part of the model; it omits the error term.
The numbers at the end of the multiple regression summary can be explained succinctly as follows: the residual standard error estimates the typical distance of the observations from the fitted values; multiple R-squared is the proportion of the variation in the target explained by the model; adjusted R-squared penalizes R-squared for the number of predictors; and the F-statistic tests whether the model as a whole is better than a model with no predictors.
So adjusted R-squared tells how well the model fits the data, i.e. the proportion of the dependent-variable variation that the linear model explains (ranging from 0 to 1). The R-squared seen here (roughly 0.20) is quite low, meaning the residuals are large: the observations are not very close to the fitted values. That is why you cannot rely on the R-squared number alone; a visual inspection of the residuals is a must!
# drawing diagnostic plots using the plot() function. Choose the plots 1, 2 and 5:
par(mfrow = c(2,2))
plot(my_model2, which=c(1,2,5))
A statistical model always includes assumptions that describe the data generating process. In this linear regression case we assume that the relationship between the target and the explanatory variables is linear, that the errors are normally distributed with constant variance, and that the errors are not correlated with each other or with the explanatory variables. These assumptions can be checked by analyzing the residuals.
In Residuals vs. Fitted we can see the source of the low R-squared value: the residuals are quite large (note the y-axis scale). However, they show no noticeable pattern, so the constant-variance assumption seems to hold. The Q-Q plot shows that the residuals lie nicely on the line, i.e. the normality assumption holds. Residuals vs. Leverage shows some points on the right-hand side of the plot, but the x-axis scale (a maximum leverage of about 0.05) is still quite small, so no single observation has an unusually large influence. All in all, our multiple regression model describes the data well and the assumptions hold.
With the model validated, we can now use our regression model to predict the behavior of the target variable!
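As an illustration of such a prediction (the student below is hypothetical, not from the data), predict() can be applied to new observations:

# a hypothetical new student with attitude 4.0 and strategic learning score 3.0
new_student <- data.frame(attitude = 4.0, stra = 3.0)
# predicted exam points: about 8.97 + 3.47 * 4.0 + 0.91 * 3.0, i.e. roughly 25.6
predict(my_model2, newdata = new_student)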
This week I practiced logistic regression by analyzing student alcohol consumption data.
The dataset describes student achievement in secondary education at two Portuguese schools. It was created by joining two datasets on performance in two distinct subjects: Mathematics (mat) and Portuguese language (por). The data attributes include student grades, demographic, social and school-related features (and, in particular, alcohol consumption); the data were collected using school reports and questionnaires.
More information: https://archive.ics.uci.edu/ml/datasets/Student+Performance
alc <- read.csv("./data/alc.csv") # reading the analysis data
dim(alc) # number of rows and columns
## [1] 382 35
str(alc) # type of data
## 'data.frame': 382 obs. of 35 variables:
## $ school : chr "GP" "GP" "GP" "GP" ...
## $ sex : chr "F" "F" "F" "F" ...
## $ age : int 18 17 15 15 16 16 16 17 15 15 ...
## $ address : chr "U" "U" "U" "U" ...
## $ famsize : chr "GT3" "GT3" "LE3" "GT3" ...
## $ Pstatus : chr "A" "T" "T" "T" ...
## $ Medu : int 4 1 1 4 3 4 2 4 3 3 ...
## $ Fedu : int 4 1 1 2 3 3 2 4 2 4 ...
## $ Mjob : chr "at_home" "at_home" "at_home" "health" ...
## $ Fjob : chr "teacher" "other" "other" "services" ...
## $ reason : chr "course" "course" "other" "home" ...
## $ nursery : chr "yes" "no" "yes" "yes" ...
## $ internet : chr "no" "yes" "yes" "yes" ...
## $ guardian : chr "mother" "father" "mother" "mother" ...
## $ traveltime: int 2 1 1 1 1 1 1 2 1 1 ...
## $ studytime : int 2 2 2 3 2 2 2 2 2 2 ...
## $ failures : int 0 0 2 0 0 0 0 0 0 0 ...
## $ schoolsup : chr "yes" "no" "yes" "no" ...
## $ famsup : chr "no" "yes" "no" "yes" ...
## $ paid : chr "no" "no" "yes" "yes" ...
## $ activities: chr "no" "no" "no" "yes" ...
## $ higher : chr "yes" "yes" "yes" "yes" ...
## $ romantic : chr "no" "no" "no" "yes" ...
## $ famrel : int 4 5 4 3 4 5 4 4 4 5 ...
## $ freetime : int 3 3 3 2 3 4 4 1 2 5 ...
## $ goout : int 4 3 2 2 2 2 4 4 2 1 ...
## $ Dalc : int 1 1 2 1 1 1 1 1 1 1 ...
## $ Walc : int 1 1 3 1 2 2 1 1 1 1 ...
## $ health : int 3 3 3 5 5 5 3 1 1 5 ...
## $ absences : int 5 3 8 1 2 8 0 4 0 0 ...
## $ G1 : int 2 7 10 14 8 14 12 8 16 13 ...
## $ G2 : int 8 8 10 14 12 14 12 9 17 14 ...
## $ G3 : int 8 8 11 14 12 14 12 10 18 14 ...
## $ alc_use : num 1 1 2.5 1 1.5 1.5 1 1 1 1 ...
## $ high_use : logi FALSE FALSE TRUE FALSE FALSE FALSE ...
The final joined dataset has 382 observations of 35 variables and includes only unique individuals. The two datasets were joined using the 13 student identifier variables: “school”, “sex”, “age”, “address”, “famsize”, “Pstatus”, “Medu”, “Fedu”, “Mjob”, “Fjob”, “reason”, “nursery” and “internet”. Only students present in both datasets were kept, and the variables not used for joining were combined by averaging (including the grade variables). The variables most relevant to this analysis are (possible values in parentheses):

- Pstatus: parent’s cohabitation status (T = living together, A = apart)
- Medu: mother’s education (0 = none, 1 = primary, 2 = 5th to 9th grade, 3 = secondary, 4 = higher education)
- famrel: quality of family relationships (from 1 = very bad to 5 = excellent)
- absences: number of school absences (0 to 93)
- Dalc: workday alcohol consumption (from 1 = very low to 5 = very high)
- Walc: weekend alcohol consumption (from 1 = very low to 5 = very high)
- G1, G2, G3: first period, second period and final grades (0 to 20)
The grades G1, G2 and G3 are related to the course subject, Math or Portuguese. Two variables, alc_use and high_use, were added to the original datasets: alc_use is the average of workday (Dalc) and weekend (Walc) alcohol consumption, and high_use is TRUE if alc_use is greater than 2 (and FALSE otherwise).
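A minimal sketch of how these two variables were created in the data wrangling step (assuming the dplyr approach used on the course):

library(dplyr)
# average of workday and weekend alcohol consumption
alc <- mutate(alc, alc_use = (Dalc + Walc) / 2)
# TRUE when the average consumption is greater than 2
alc <- mutate(alc, high_use = alc_use > 2)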
The purpose of the analysis is to study the relationships between high/low alcohol consumption and some of the other variables in the data. Out of the many variables present, parents’ cohabitation status (Pstatus), mother’s education (Medu), quality of family relationships (famrel) and the number of school absences were chosen. Thus the study hypotheses are as follows:

- H1: students whose parents live apart consume more alcohol,
- H2: lower mother’s education is associated with higher alcohol consumption,
- H3: worse family relationships are associated with higher alcohol consumption,
- H4: a higher number of school absences is associated with higher alcohol consumption.
Note! Now the target variable high_use is a binary variable (TRUE=1, FALSE=0) so we must use logistic regression.
Before fitting the logistic regression model, let’s explore the distributions of the chosen variables and their connection to the target variable both numerically and graphically. First, let’s take a look at the variable distributions:
library(dplyr)
# use dplyr to make a smaller dataset and include all chosen variables
alc_test <- select(alc, Pstatus, Medu, famrel, absences, alc_use, high_use)
# change Pstatus from char to factor
alc_test$Pstatus <- as.factor(alc_test$Pstatus)
# numerical summary of chosen variables
summary(alc_test[-c(5,6)])
## Pstatus Medu famrel absences
## A: 38 Min. :0.000 Min. :1.000 Min. : 0.0
## T:344 1st Qu.:2.000 1st Qu.:4.000 1st Qu.: 1.0
## Median :3.000 Median :4.000 Median : 3.0
## Mean :2.806 Mean :3.937 Mean : 4.5
## 3rd Qu.:4.000 3rd Qu.:5.000 3rd Qu.: 6.0
## Max. :4.000 Max. :5.000 Max. :45.0
# graphical exploration of variable distribution
par(mfrow = c(2,2))
#
barplot(table(alc_test$Pstatus), main="Distribution of Pstatus")
barplot(table(alc_test$Medu), main="Distribution of Medu")
barplot(table(alc_test$famrel), main="Distribution of famrel")
barplot(table(alc_test$absences), main="Distribution of absences")
From the data we can see that the vast majority of participants live together with their parents (T). Also, most of the participants have good family relations (famrel of 4 or 5) and the number of absences is relatively small (75 % of participants have 6 or fewer). Mother’s education is fairly evenly distributed among the variable values, zero excepted.
Out of curiosity, let’s see how alcohol consumption is distributed.
summary(alc_test[c(5,6)])
## alc_use high_use
## Min. :1.000 Mode :logical
## 1st Qu.:1.000 FALSE:268
## Median :1.500 TRUE :114
## Mean :1.889
## 3rd Qu.:2.500
## Max. :5.000
par(mfrow = c(1,2))
barplot(table(alc_test$alc_use), main="Distribution of alc_use")
barplot(table(alc_test$high_use), main="Distribution of high_use")
It seems that only about one third of the participants (114 of 382) use alcohol in high volumes. To better grasp the situation, the numeric alcohol use (alc_use, from 1 to 5) is shown here alongside the binary high use variable (high_use, TRUE if alc_use > 2).
Now let’s see how our chosen variables relate to alcohol consumption. Here, too, the numerical alc_use is used.
par(mfrow = c(2,2))
boxplot(alc_use ~ Pstatus, data = alc)
boxplot(alc_use ~ Medu, data=alc)
boxplot(alc_use ~ famrel, data=alc)
boxplot(alc_use ~ absences, data=alc)
These boxplots give a rough idea of the relationships between the chosen variables and alcohol consumption.
Time to formalize these observations with a logistic regression model! Note that Pstatus is treated as a factor here. To summarize:
Target variable = high_use
Chosen variables: Pstatus, Medu, famrel, absences
m <- glm(high_use ~ Pstatus+Medu+famrel+absences, data = alc_test, family ="binomial")
# print out a summary of the model
summary(m)
##
## Call:
## glm(formula = high_use ~ Pstatus + Medu + famrel + absences,
## family = "binomial", data = alc_test)
##
## Deviance Residuals:
## Min 1Q Median 3Q Max
## -2.2741 -0.8107 -0.7076 1.1985 1.8620
##
## Coefficients:
## Estimate Std. Error z value Pr(>|z|)
## (Intercept) -0.453190 0.697544 -0.650 0.515890
## PstatusT 0.168029 0.397078 0.423 0.672176
## Medu -0.008959 0.107430 -0.083 0.933542
## famrel -0.243649 0.124088 -1.964 0.049585 *
## absences 0.088049 0.022951 3.836 0.000125 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## (Dispersion parameter for binomial family taken to be 1)
##
## Null deviance: 465.68 on 381 degrees of freedom
## Residual deviance: 443.46 on 377 degrees of freedom
## AIC: 453.46
##
## Number of Fisher Scoring iterations: 4
The most important insight from the model summary is the coefficients section. Using the estimated coefficients, the fitted model for the log-odds of high alcohol use is

\(\log\frac{p}{1-p} = -0.45 + 0.17 \times PstatusT - 0.01 \times Medu - 0.24 \times famrel + 0.09 \times absences\)

where \(p\) is the probability that high_use is 1. Since the target variable high_use is binary, it only takes the values 0 (FALSE, “failure”) and 1 (TRUE, “success”) in the modelling sense. The coefficients are thus interpreted on the log-odds scale: a one-unit increase in a variable changes the log-odds of high alcohol use by the value of its coefficient, holding the other variables constant.
Of the variables, only famrel and absences were found to be statistically significant, at the 0.05 and 0.001 levels, respectively.
The model coefficients can also be interpreted as odds ratios. Odds are the ratio of expected “successes” to “failures”, i.e. \(\frac{p}{1-p}\), with values ranging from 0 to infinity; higher odds correspond to a higher probability of success, so they are an alternative way of expressing probabilities. Exponentiating a coefficient gives the odds ratio associated with a one-unit increase in that variable. Let’s calculate the odds ratios and their confidence intervals for the coefficients.
# print out the coefficients of the model
coef(m)
## (Intercept) PstatusT Medu famrel absences
## -0.453190280 0.168028965 -0.008958589 -0.243649169 0.088049068
# compute odds ratios (OR)
OR <- coef(m) %>% exp
# compute confidence intervals (CI)
CI <- confint(m) %>% exp
# print out the odds ratios with their confidence intervals
cbind(OR, CI)
## OR 2.5 % 97.5 %
## (Intercept) 0.6355972 0.1580208 2.462028
## PstatusT 1.1829709 0.5566928 2.673024
## Medu 0.9910814 0.8032307 1.224977
## famrel 0.7837626 0.6137127 1.000047
## absences 1.0920417 1.0461954 1.144793
Odds ratios quantify the relationship between a variable and the target variable: an odds ratio higher than 1 means the variable is positively associated with “success”. The odds ratios of our variables can be interpreted as follows: the confidence intervals of PstatusT (1.18) and Medu (0.99) include 1, so they show no clear association; each one-point increase in famrel multiplies the odds of high alcohol use by about 0.78 (a decrease of roughly 22 %); and each additional absence multiplies the odds by about 1.09 (an increase of roughly 9 %).
Comparing the results with our hypotheses: H1 and H2 are not supported, since the effects of Pstatus and Medu are statistically insignificant, whereas H3 and H4 are supported, since worse family relationships and a higher number of absences both increase the odds of high alcohol use.
Let’s improve our model by discarding the least significant variables, Pstatus and Medu. The final model then has only the variables famrel and absences.
# fix the model
m2 <- glm(high_use ~ famrel+absences, data = alc_test, family ="binomial")
# print out a summary of the model
summary(m2)
##
## Call:
## glm(formula = high_use ~ famrel + absences, family = "binomial",
## data = alc_test)
##
## Deviance Residuals:
## Min 1Q Median 3Q Max
## -2.3139 -0.8028 -0.7125 1.2100 1.8605
##
## Coefficients:
## Estimate Std. Error z value Pr(>|z|)
## (Intercept) -0.33027 0.50783 -0.650 0.515461
## famrel -0.24109 0.12365 -1.950 0.051211 .
## absences 0.08668 0.02270 3.819 0.000134 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## (Dispersion parameter for binomial family taken to be 1)
##
## Null deviance: 465.68 on 381 degrees of freedom
## Residual deviance: 443.66 on 379 degrees of freedom
## AIC: 449.66
##
## Number of Fisher Scoring iterations: 4
The Pr(>|z|) values of famrel and absences are almost the same as in the original model (famrel in fact slips just above 0.05), but the AIC drops from 453.46 to 449.66 and the model is simpler, so this is an improvement.
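To confirm that dropping Pstatus and Medu loses essentially nothing, a likelihood-ratio test between the nested models can be used (a quick check, not part of the original analysis):

# compare the nested models; a large p-value supports the simpler model m2
anova(m2, m, test = "Chisq")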
Let’s use the improved model to predict high_use for the individuals in the data.
# predict() the probability of high_use
probabilities <- predict(m2, type = "response")
# add the predicted probabilities to 'alc_test'
alc_test <- mutate(alc_test, probability = probabilities)
# use the probabilities to make a prediction of high_use
alc_test <- mutate(alc_test, prediction = probabilities>0.5)
# see the last ten original classes, predicted probabilities, and class predictions
select(alc_test, famrel,absences, high_use, probability, prediction) %>% tail(10)
## famrel absences high_use probability prediction
## 373 4 0 FALSE 0.2150733 FALSE
## 374 5 7 TRUE 0.2831342 FALSE
## 375 5 1 FALSE 0.1901522 FALSE
## 376 4 6 FALSE 0.3154941 FALSE
## 377 5 2 FALSE 0.2038593 FALSE
## 378 4 2 FALSE 0.2457776 FALSE
## 379 2 2 FALSE 0.3454525 FALSE
## 380 1 3 FALSE 0.4227907 FALSE
## 381 2 4 TRUE 0.3856256 FALSE
## 382 4 2 TRUE 0.2457776 FALSE
# tabulate the target variable versus the predictions
select(alc_test, high_use, prediction) %>% table()
## prediction
## high_use FALSE TRUE
## FALSE 259 9
## TRUE 101 13
library(ggplot2)
# initialize a plot of 'high_use' versus 'probability' in 'alc_test'
g <- ggplot(alc_test, aes(x =probability, y = high_use, col=prediction))
# define the geom as points and draw the plot
g+geom_point()
# tabulate the target variable versus the predictions
table(high_use = alc_test$high_use, prediction = alc_test$prediction) %>% prop.table() %>% addmargins()
## prediction
## high_use FALSE TRUE Sum
## FALSE 0.67801047 0.02356021 0.70157068
## TRUE 0.26439791 0.03403141 0.29842932
## Sum 0.94240838 0.05759162 1.00000000
# define a loss function (mean prediction error)
loss_func <- function(class, prob) {
n_wrong <- abs(class - prob) > 0.5
mean(n_wrong)
}
# call loss_func to compute the average number of wrong predictions in the (training) data
loss_func(class = alc_test$high_use, prob = alc_test$probability)
## [1] 0.2879581
To interpret the result: the model predicts wrongly about 29 % of the time on the training data. There is much room for improvement, but the model is clearly better than random guessing (a 50-50 chance) and slightly better than the trivial strategy of always predicting FALSE, which would be wrong for the 114 high users, i.e. about 30 % of the time.
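These baselines can be verified with the same loss function by feeding it constant predictions (a quick sanity check, not in the original output):

# always predicting FALSE: the error equals the share of high users, about 0.298
loss_func(class = alc_test$high_use, prob = 0)
# always predicting TRUE: about 0.702
loss_func(class = alc_test$high_use, prob = 1)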
# K-fold cross-validation
library(boot)
cv <- cv.glm(data = alc_test, cost = loss_func, glmfit = m2, K = 10)
# average number of wrong predictions in the cross validation
cv$delta[1]
## [1] 0.2931937
The prediction error obtained here (about 0.29) is larger than that of the model introduced in DataCamp (prediction error of 0.26). Choosing more significant variables for the model would likely improve the predictions.
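As a sketch of how a better model could be searched for (the candidate predictors below, such as goout and sex, are illustrative choices, not part of the original analysis), the cross-validation can be repeated over alternative formulas:

library(boot)
# candidate models to compare by 10-fold cross-validation
formulas <- list(high_use ~ famrel + absences,
                 high_use ~ goout + absences,
                 high_use ~ sex + goout + absences)
for (f in formulas) {
  m_cand <- glm(f, data = alc, family = "binomial")
  cv_cand <- cv.glm(data = alc, cost = loss_func, glmfit = m_cand, K = 10)
  print(c(deparse(f), round(cv_cand$delta[1], 3)))
}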
This week I practiced clustering and classification, applying linear discriminant analysis (LDA) and k-means clustering.
This chapter’s dataset consists of housing values in suburbs of Boston (the Boston data from the MASS package).
# access the MASS package
library(MASS)
##
## Attaching package: 'MASS'
## The following object is masked from 'package:dplyr':
##
## select
# load the data
data("Boston")
# explore the dataset
str(Boston)
## 'data.frame': 506 obs. of 14 variables:
## $ crim : num 0.00632 0.02731 0.02729 0.03237 0.06905 ...
## $ zn : num 18 0 0 0 0 0 12.5 12.5 12.5 12.5 ...
## $ indus : num 2.31 7.07 7.07 2.18 2.18 2.18 7.87 7.87 7.87 7.87 ...
## $ chas : int 0 0 0 0 0 0 0 0 0 0 ...
## $ nox : num 0.538 0.469 0.469 0.458 0.458 0.458 0.524 0.524 0.524 0.524 ...
## $ rm : num 6.58 6.42 7.18 7 7.15 ...
## $ age : num 65.2 78.9 61.1 45.8 54.2 58.7 66.6 96.1 100 85.9 ...
## $ dis : num 4.09 4.97 4.97 6.06 6.06 ...
## $ rad : int 1 2 2 3 3 3 5 5 5 5 ...
## $ tax : num 296 242 242 222 222 222 311 311 311 311 ...
## $ ptratio: num 15.3 17.8 17.8 18.7 18.7 18.7 15.2 15.2 15.2 15.2 ...
## $ black : num 397 397 393 395 397 ...
## $ lstat : num 4.98 9.14 4.03 2.94 5.33 ...
## $ medv : num 24 21.6 34.7 33.4 36.2 28.7 22.9 27.1 16.5 18.9 ...
The dataset contains 506 observations of 14 variables:

- crim: per capita crime rate by town
- zn: proportion of residential land zoned for lots over 25,000 sq.ft.
- indus: proportion of non-retail business acres per town
- chas: Charles River dummy variable (1 if the tract bounds the river, 0 otherwise)
- nox: nitrogen oxides concentration (parts per 10 million)
- rm: average number of rooms per dwelling
- age: proportion of owner-occupied units built prior to 1940
- dis: weighted mean of distances to five Boston employment centres
- rad: index of accessibility to radial highways
- tax: full-value property-tax rate per $10,000
- ptratio: pupil-teacher ratio by town
- black: 1000(Bk - 0.63)^2, where Bk is the proportion of blacks by town
- lstat: lower status of the population (percent)
- medv: median value of owner-occupied homes in $1000s
Let’s start with a graphical overview of the data and summaries of the variables, commenting on the distributions of the variables and the relationships between them.
library(dplyr)
library(corrplot)
## corrplot 0.84 loaded
pairs(Boston)
# calculate the correlation matrix and round it
cor_matrix<-cor(Boston)
# print the correlation matrix
cor_matrix %>% round(digits=2)
## crim zn indus chas nox rm age dis rad tax ptratio
## crim 1.00 -0.20 0.41 -0.06 0.42 -0.22 0.35 -0.38 0.63 0.58 0.29
## zn -0.20 1.00 -0.53 -0.04 -0.52 0.31 -0.57 0.66 -0.31 -0.31 -0.39
## indus 0.41 -0.53 1.00 0.06 0.76 -0.39 0.64 -0.71 0.60 0.72 0.38
## chas -0.06 -0.04 0.06 1.00 0.09 0.09 0.09 -0.10 -0.01 -0.04 -0.12
## nox 0.42 -0.52 0.76 0.09 1.00 -0.30 0.73 -0.77 0.61 0.67 0.19
## rm -0.22 0.31 -0.39 0.09 -0.30 1.00 -0.24 0.21 -0.21 -0.29 -0.36
## age 0.35 -0.57 0.64 0.09 0.73 -0.24 1.00 -0.75 0.46 0.51 0.26
## dis -0.38 0.66 -0.71 -0.10 -0.77 0.21 -0.75 1.00 -0.49 -0.53 -0.23
## rad 0.63 -0.31 0.60 -0.01 0.61 -0.21 0.46 -0.49 1.00 0.91 0.46
## tax 0.58 -0.31 0.72 -0.04 0.67 -0.29 0.51 -0.53 0.91 1.00 0.46
## ptratio 0.29 -0.39 0.38 -0.12 0.19 -0.36 0.26 -0.23 0.46 0.46 1.00
## black -0.39 0.18 -0.36 0.05 -0.38 0.13 -0.27 0.29 -0.44 -0.44 -0.18
## lstat 0.46 -0.41 0.60 -0.05 0.59 -0.61 0.60 -0.50 0.49 0.54 0.37
## medv -0.39 0.36 -0.48 0.18 -0.43 0.70 -0.38 0.25 -0.38 -0.47 -0.51
## black lstat medv
## crim -0.39 0.46 -0.39
## zn 0.18 -0.41 0.36
## indus -0.36 0.60 -0.48
## chas 0.05 -0.05 0.18
## nox -0.38 0.59 -0.43
## rm 0.13 -0.61 0.70
## age -0.27 0.60 -0.38
## dis 0.29 -0.50 0.25
## rad -0.44 0.49 -0.38
## tax -0.44 0.54 -0.47
## ptratio -0.18 0.37 -0.51
## black 1.00 -0.37 0.33
## lstat -0.37 1.00 -0.74
## medv 0.33 -0.74 1.00
# visualize the correlation matrix
corrplot(cor_matrix, method="circle", type="upper", cl.pos="b", tl.pos="d", tl.cex=0.6)
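The strongest pairs can also be extracted programmatically (a small helper, not part of the original code):

# list variable pairs with an absolute correlation above 0.7
cm <- cor_matrix
cm[lower.tri(cm, diag = TRUE)] <- NA # keep each pair only once
subset(as.data.frame(as.table(cm)), abs(Freq) > 0.7)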
From the correlation matrix and the plot we can see that the strongest positive correlations are between rad and tax (0.91) and between indus and nox (0.76), while the strongest negative correlations are between nox and dis (-0.77), age and dis (-0.75) and lstat and medv (-0.74). The crime rate correlates most strongly with rad (0.63) and tax (0.58).

Next, let’s standardize the dataset and print out summaries of the scaled data to see how the variables change. We then create a categorical variable of the crime rate (from the scaled crime rate) using the quantiles as break points, drop the old crime rate variable, and divide the dataset into train and test sets so that 80 % of the data belongs to the train set.
# center and standardize variables
boston_scaled <- scale(Boston)
# summaries of the scaled variables
summary(boston_scaled)
## crim zn indus chas
## Min. :-0.419367 Min. :-0.48724 Min. :-1.5563 Min. :-0.2723
## 1st Qu.:-0.410563 1st Qu.:-0.48724 1st Qu.:-0.8668 1st Qu.:-0.2723
## Median :-0.390280 Median :-0.48724 Median :-0.2109 Median :-0.2723
## Mean : 0.000000 Mean : 0.00000 Mean : 0.0000 Mean : 0.0000
## 3rd Qu.: 0.007389 3rd Qu.: 0.04872 3rd Qu.: 1.0150 3rd Qu.:-0.2723
## Max. : 9.924110 Max. : 3.80047 Max. : 2.4202 Max. : 3.6648
## nox rm age dis
## Min. :-1.4644 Min. :-3.8764 Min. :-2.3331 Min. :-1.2658
## 1st Qu.:-0.9121 1st Qu.:-0.5681 1st Qu.:-0.8366 1st Qu.:-0.8049
## Median :-0.1441 Median :-0.1084 Median : 0.3171 Median :-0.2790
## Mean : 0.0000 Mean : 0.0000 Mean : 0.0000 Mean : 0.0000
## 3rd Qu.: 0.5981 3rd Qu.: 0.4823 3rd Qu.: 0.9059 3rd Qu.: 0.6617
## Max. : 2.7296 Max. : 3.5515 Max. : 1.1164 Max. : 3.9566
## rad tax ptratio black
## Min. :-0.9819 Min. :-1.3127 Min. :-2.7047 Min. :-3.9033
## 1st Qu.:-0.6373 1st Qu.:-0.7668 1st Qu.:-0.4876 1st Qu.: 0.2049
## Median :-0.5225 Median :-0.4642 Median : 0.2746 Median : 0.3808
## Mean : 0.0000 Mean : 0.0000 Mean : 0.0000 Mean : 0.0000
## 3rd Qu.: 1.6596 3rd Qu.: 1.5294 3rd Qu.: 0.8058 3rd Qu.: 0.4332
## Max. : 1.6596 Max. : 1.7964 Max. : 1.6372 Max. : 0.4406
## lstat medv
## Min. :-1.5296 Min. :-1.9063
## 1st Qu.:-0.7986 1st Qu.:-0.5989
## Median :-0.1811 Median :-0.1449
## Mean : 0.0000 Mean : 0.0000
## 3rd Qu.: 0.6024 3rd Qu.: 0.2683
## Max. : 3.5453 Max. : 2.9865
After scaling, each variable has mean 0 and standard deviation 1, putting the variables on a comparable scale. Since scale() returns a matrix, we convert the object back to a data frame:
# class of the boston_scaled object
class(boston_scaled)
## [1] "matrix" "array"
# change the object to data frame
boston_scaled <- as.data.frame(boston_scaled)
# MASS, Boston and boston_scaled are available
# create a quantile vector of crim and print it
bins <- quantile(boston_scaled$crim)
bins
## 0% 25% 50% 75% 100%
## -0.419366929 -0.410563278 -0.390280295 0.007389247 9.924109610
# create a categorical variable 'crime'
crime <- cut(boston_scaled$crim, breaks = bins, include.lowest = TRUE)
# look at the table of the new factor crime
table(crime)
# remove original crim from the dataset
boston_scaled <- dplyr::select(boston_scaled, -crim)
# add the new categorical value to scaled data
boston_scaled <- data.frame(boston_scaled, crime)
# boston_scaled is available
# number of rows in the Boston dataset
n <- nrow(Boston)
# choose randomly 80 % of the rows (note: without set.seed() the split varies between knits)
ind <- sample(n, size = n * 0.8)
# create train set
train <- boston_scaled[ind,]
# create test set
test <- boston_scaled[-ind,]
# save the correct classes from test data
correct_classes <- test$crime
# remove the crime variable from test data
test <- dplyr::select(test, -crime)
Next, let’s fit linear discriminant analysis on the train set, using the categorical crime rate as the target variable and all the other variables in the dataset as predictors, and draw the LDA (bi)plot.
# MASS and train are available
# linear discriminant analysis
lda.fit <- lda(crime ~., data = train)
# print the lda.fit object
lda.fit
## Call:
## lda(crime ~ ., data = train)
##
## Prior probabilities of groups:
## [-0.419,-0.411] (-0.411,-0.39] (-0.39,0.00739] (0.00739,9.92]
## 0.2524752 0.2524752 0.2475248 0.2475248
##
## Group means:
## zn indus chas nox rm
## [-0.419,-0.411] 0.88209581 -0.8508129 -0.07933396 -0.8473266 0.4062723
## (-0.411,-0.39] -0.08789431 -0.2471167 0.03646311 -0.5668584 -0.1219210
## (-0.39,0.00739] -0.39119540 0.2475560 0.23949396 0.4270446 0.1412371
## (0.00739,9.92] -0.48724019 1.0171519 -0.07547406 1.0810101 -0.4294293
## age dis rad tax ptratio
## [-0.419,-0.411] -0.8895773 0.8539187 -0.6767393 -0.7446541 -0.39472273
## (-0.411,-0.39] -0.3148318 0.3406919 -0.5393736 -0.4415848 -0.06323769
## (-0.39,0.00739] 0.4275874 -0.3913932 -0.4087862 -0.2814048 -0.24967538
## (0.00739,9.92] 0.8214577 -0.8564107 1.6377820 1.5138081 0.78037363
## black lstat medv
## [-0.419,-0.411] 0.38540909 -0.75681126 0.498189377
## (-0.411,-0.39] 0.31765298 -0.14996459 -0.006125378
## (-0.39,0.00739] 0.02127577 0.01091389 0.179099166
## (0.00739,9.92] -0.78990130 0.88686209 -0.694001749
##
## Coefficients of linear discriminants:
## LD1 LD2 LD3
## zn 0.09897731 0.59170971 -0.94388854
## indus 0.01873827 -0.18041635 0.24891005
## chas -0.08449136 -0.04876776 0.14105589
## nox 0.38623886 -0.86950189 -1.37536470
## rm -0.10911244 -0.17483774 -0.18615528
## age 0.21962872 -0.32251356 0.07139725
## dis -0.07872999 -0.27847459 0.19411354
## rad 3.19380274 1.02600365 -0.23873768
## tax -0.02427280 -0.04481996 0.76711044
## ptratio 0.14009292 -0.08423752 -0.35942380
## black -0.12605515 0.06675495 0.14801674
## lstat 0.28139797 -0.29464529 0.30966491
## medv 0.23977775 -0.45654221 -0.22862357
##
## Proportion of trace:
## LD1 LD2 LD3
## 0.9494 0.0374 0.0132
# the function for lda biplot arrows
lda.arrows <- function(x, myscale = 1, arrow_heads = 0.1, color = "red", tex = 0.75, choices = c(1,2)){
heads <- coef(x)
arrows(x0 = 0, y0 = 0,
x1 = myscale * heads[,choices[1]],
y1 = myscale * heads[,choices[2]], col=color, length = arrow_heads)
text(myscale * heads[,choices], labels = row.names(heads),
cex = tex, col=color, pos=3)
}
# target classes as numeric
classes <- as.numeric(train$crime)
# plot the lda results
plot(lda.fit, dimen = 2, col=classes, pch=classes)
lda.arrows(lda.fit, myscale = 1)
The proportion of trace shows that LD1 alone captures about 95 % of the between-group variance, and its coefficients show that rad is by far its most influential variable. Next, let’s predict the classes with the LDA model on the test data (the correct classes were saved and the categorical crime variable removed from the test set earlier) and cross tabulate the results against the crime categories from the test set.
# lda.fit, correct_classes and test are available
# predict classes with test data
lda.pred <- predict(lda.fit, newdata = test)
# cross tabulate the results
table(correct = correct_classes, predicted = lda.pred$class)
## predicted
## correct [-0.419,-0.411] (-0.411,-0.39] (-0.39,0.00739] (0.00739,9.92]
## [-0.419,-0.411] 21 4 0 0
## (-0.411,-0.39] 6 14 4 0
## (-0.39,0.00739] 1 12 12 1
## (0.00739,9.92] 0 0 0 27
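The overall accuracy can be computed directly from the predictions (a quick check; the exact numbers vary with the random train/test split):

# proportion of correctly classified test observations
# from the table above: (21 + 14 + 12 + 27) / 102, i.e. roughly 73 %
mean(lda.pred$class == correct_classes)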
The model classifies the highest crime-rate category perfectly (27 of 27 correct), which makes sense since it is clearly separated along LD1. The lower categories are frequently confused with their neighbours, which is expected because their class boundaries lie very close together on the scaled crime rate. Overall, the predictions are clearly better than chance.

Next, let’s reload the Boston dataset, calculate distances between the observations and run the k-means algorithm, investigating the optimal number of clusters. (To get comparable distances, the variables should be standardized; the summaries below are computed on the raw data, with a standardized version sketched afterwards.)
# load MASS and Boston
library(MASS)
data('Boston')
# euclidean distance matrix (note: computed here on the raw data; a standardized version is sketched below)
dist_eu <- dist(Boston)
# look at the summary of the distances
summary(dist_eu)
## Min. 1st Qu. Median Mean 3rd Qu. Max.
## 1.119 85.624 170.539 226.315 371.950 626.047
# manhattan distance matrix
dist_man <- dist(Boston,method="manhattan")
# look at the summary of the distances
summary(dist_man)
## Min. 1st Qu. Median Mean 3rd Qu. Max.
## 2.016 149.145 279.505 342.899 509.707 1198.265
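Because the variables are on very different scales (tax is in the hundreds while nox is below 1), the distances above are dominated by the large-scale variables. A sketch of the standardized version (output not shown):

# standardize first, then compute the distance matrices
boston_std <- as.data.frame(scale(Boston))
summary(dist(boston_std))                       # euclidean distances on scaled data
summary(dist(boston_std, method = "manhattan")) # manhattan distances on scaled data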
Bonus: let’s perform k-means on the Boston data with some reasonable number of clusters (> 2), then perform LDA using the clusters as target classes, including all the variables of the Boston data in the LDA model, and see which variables are the most influential linear separators for the clusters.
# Boston dataset is available
library(ggplot2)
# k-means clustering with 3 clusters (note: run here on the unscaled data)
km <- kmeans(Boston, centers = 3)
# plot the Boston dataset with clusters
pairs(Boston, col = km$cluster)
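The LDA part of the bonus task can be sketched as follows, using the standardized data (boston_std from above) so that no single variable dominates the clustering:

# k-means with 3 clusters on the standardized data
set.seed(123)
km3 <- kmeans(boston_std, centers = 3)
cluster <- factor(km3$cluster)
# LDA with the k-means clusters as target classes
lda.km <- lda(cluster ~ ., data = boston_std)
# biplot with arrows for the original variables (lda.arrows was defined above)
plot(lda.km, dimen = 2, col = km3$cluster, pch = km3$cluster)
lda.arrows(lda.km, myscale = 2)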
# MASS, ggplot2 and Boston dataset are available
set.seed(123)
# determine the number of clusters
k_max <- 10
# calculate the total within sum of squares
twcss <- sapply(1:k_max, function(k){kmeans(Boston, k)$tot.withinss})
# visualize the results
qplot(x = 1:k_max, y = twcss, geom = 'line')
The total within-cluster sum of squares (twcss) drops most sharply when moving from one to two clusters, so two is a reasonable number of clusters.
# k-means clustering with two clusters
km <- kmeans(Boston, centers = 2)
# plot the Boston dataset with clusters
pairs(Boston, col = km$cluster)
In the pairs plot the two clusters seem to separate mainly along the large-scale variables (such as tax), which is expected when k-means is run on unscaled data.

Super-Bonus: let’s run the code below for the (scaled) train data that was used to fit the LDA. The code creates a matrix product, which is a projection of the data points.
model_predictors <- dplyr::select(train, -crime)
# check the dimensions
dim(model_predictors)
## [1] 404 13
dim(lda.fit$scaling)
## [1] 13 3
# matrix multiplication
matrix_product <- as.matrix(model_predictors) %*% lda.fit$scaling
matrix_product <- as.data.frame(matrix_product)
Next, let’s install and access the plotly package and create a 3D plot (cool!) of the columns of the matrix product with the code below.
library(plotly)
##
## Attaching package: 'plotly'
## The following object is masked from 'package:MASS':
##
## select
## The following object is masked from 'package:ggplot2':
##
## last_plot
## The following object is masked from 'package:stats':
##
## filter
## The following object is masked from 'package:graphics':
##
## layout
# Note! To install plotly in Linux, remember to install libcurl from terminal.
# * deb: libcurl4-openssl-dev (Debian, Ubuntu, etc)
# * rpm: libcurl-devel (Fedora, CentOS, RHEL)
# * csw: libcurl_dev (Solaris)
plot_ly(x = matrix_product$LD1, y = matrix_product$LD2, z = matrix_product$LD3, type= 'scatter3d', mode='markers')
Finally, let’s adjust the code by adding a color argument to the plot_ly() function, set to the crime classes of the train set, and then draw another 3D plot where the color is defined by the clusters of k-means. Comparing the two plots shows how well the k-means clustering recovers the crime classes.
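A sketch of the two adjusted plots (the k-means run below, on the same scaled predictors with 4 centers, is an illustrative choice):

# color the 3D plot by the crime classes of the train set
plot_ly(x = matrix_product$LD1, y = matrix_product$LD2, z = matrix_product$LD3,
        type = 'scatter3d', mode = 'markers', color = train$crime)
# color a second plot by k-means clusters computed from the same predictors
km_train <- kmeans(model_predictors, centers = 4)
plot_ly(x = matrix_product$LD1, y = matrix_product$LD2, z = matrix_product$LD3,
        type = 'scatter3d', mode = 'markers', color = as.factor(km_train$cluster))

If the k-means clusters align with the crime classes, the two plots should show similar groupings of points.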
(more chapters to be added similarly as we proceed with the course!)